irregularly-sampled time series
Latent Ordinary Differential Equations for Irregularly-Sampled Time Series
Time series with non-uniform intervals occur in many applications, and are difficult to model using standard recurrent neural networks (RNNs). We generalize RNNs to have continuous-time hidden dynamics defined by ordinary differential equations (ODEs), a model we call ODE-RNNs. Furthermore, we use ODE-RNNs to replace the recognition network of the recently-proposed Latent ODE model. Both ODE-RNNs and Latent ODEs can naturally handle arbitrary time gaps between observations, and can explicitly model the probability of observation times using Poisson processes. We show experimentally that these ODE-based models outperform their RNN-based counterparts on irregularly-sampled data.
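The abstract's core mechanism — a hidden state that evolves continuously under an ODE between observations and receives a discrete RNN update at each observation — can be sketched as follows. This is a minimal illustration with random placeholder weights and a toy tanh-linear vector field integrated by fixed-step Euler, not the authors' trained model or their adaptive solver:

```python
import numpy as np

def ode_rnn(times, observations, hidden_dim=4, seed=0):
    """Sketch of an ODE-RNN cell: between observations the hidden state
    follows an ODE dh/dt = f(h); at each observation an RNN-style update
    fires. All weights are random placeholders for illustration."""
    rng = np.random.default_rng(seed)
    A = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))  # dynamics: f(h) = tanh(A h)
    W = rng.normal(scale=0.1, size=(hidden_dim, 1))           # input weights
    U = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))  # recurrent weights

    h = np.zeros(hidden_dim)
    t_prev = times[0]
    states = []
    for t, x in zip(times, observations):
        # Evolve h continuously across the (possibly irregular) gap t_prev -> t.
        n_steps = max(1, int((t - t_prev) / 0.05))
        dt = (t - t_prev) / n_steps
        for _ in range(n_steps):
            h = h + dt * np.tanh(A @ h)       # Euler step of dh/dt = f(h)
        # Discrete RNN update at the observation time.
        h = np.tanh(U @ h + (W * x).ravel())
        states.append(h.copy())
        t_prev = t
    return np.stack(states)

# Irregularly spaced observation times need no special handling.
ts = np.array([0.0, 0.3, 1.7, 1.9])
xs = np.array([0.5, -1.0, 0.2, 0.8])
H = ode_rnn(ts, xs)
print(H.shape)  # (4, 4): one hidden state per observation
```

The key contrast with a standard RNN is the inner Euler loop: the amount of hidden-state evolution depends on the actual elapsed time, so arbitrary gaps between observations are handled naturally.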
Reviews: Latent Ordinary Differential Equations for Irregularly-Sampled Time Series
Update after rebuttal: Thank you for your response. The inclusion of more references, error bars, and hyperparameter details for the experiments makes the paper stronger. I have raised my score to an 8.

Original review: This is a good paper. I have put a score of 7, but I'm happy to raise this to 8 if the authors address point 1 under "notes and questions" below and cite some earlier ODE-adjoint literature.

Originality - The combination of RNNs and neural ODEs is novel, as is the combination to form an encoder-decoder model with continuous-time latent state evolution.
TrajGPT: Irregular Time-Series Representation Learning for Health Trajectory Analysis
Song, Ziyang, Lu, Qingcheng, Zhu, He, Buckeridge, David, Li, Yue
In many domains, such as healthcare, time-series data is often irregularly sampled with varying intervals between observations. This poses challenges for classical time-series models that require equally spaced data. To address this, we propose a novel time-series Transformer called Trajectory Generative Pre-trained Transformer (TrajGPT). TrajGPT employs a novel Selective Recurrent Attention (SRA) mechanism, which utilizes a data-dependent decay to adaptively filter out irrelevant past information based on contexts. By interpreting TrajGPT as discretized ordinary differential equations (ODEs), it effectively captures the underlying continuous dynamics and enables time-specific inference for forecasting arbitrary target timesteps. Experimental results demonstrate that TrajGPT excels in trajectory forecasting, drug usage prediction, and phenotype classification without requiring task-specific fine-tuning. By evolving the learned continuous dynamics, TrajGPT can interpolate and extrapolate disease risk trajectories from partially-observed time series. The visualization of predicted health trajectories shows that TrajGPT forecasts unseen diseases based on the history of clinically relevant phenotypes (i.e., contexts).

Time-series representation learning plays a crucial role in various domains, as it facilitates the extraction of generalizable temporal patterns from large-scale, unlabeled data, which can then be adapted for diverse tasks (Ma et al., 2023).
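The abstract does not spell out the SRA equations, so the following is only an illustrative reading of "data-dependent decay over irregular gaps", not TrajGPT's actual mechanism: a recurrence whose memory fades by exp(-lambda * dt) across each gap, with the per-channel decay rate lambda gated by the current input. All weights are random placeholders:

```python
import numpy as np

def decayed_recurrence(times, values, d=4, seed=0):
    """Illustrative sketch (not TrajGPT's exact SRA): memory decays by
    exp(-lambda * dt) over each irregular gap, with lambda produced from
    the current input, i.e. a data-dependent decay."""
    rng = np.random.default_rng(seed)
    Wv = rng.normal(scale=0.1, size=(d,))  # value embedding weights
    Wg = rng.normal(scale=0.1, size=(d,))  # gate producing per-channel decay rates

    h = np.zeros(d)
    t_prev = times[0]
    outs = []
    for t, v in zip(times, values):
        lam = np.log1p(np.exp(Wg * v))       # softplus: positive, data-dependent rates
        h = np.exp(-lam * (t - t_prev)) * h  # old context fades faster over large gaps
        h = h + Wv * v                       # incorporate the new observation
        outs.append(h.copy())
        t_prev = t
    return np.stack(outs)

ts = np.array([0.0, 0.5, 3.0])   # note the long 0.5 -> 3.0 gap
vs = np.array([1.0, -0.5, 2.0])
print(decayed_recurrence(ts, vs).shape)  # (3, 4)
```

Because the decay is a function of elapsed time, such a recurrence can be read as a discretized ODE, which is the interpretation the abstract uses to justify forecasting at arbitrary target timesteps.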
Deep learning for predicting the occurrence of tipping points
Zhuge, Chengzuo, Li, Jiawei, Chen, Wei
Tipping points occur in many real-world systems, at which the system shifts suddenly from one state to another. The ability to predict the occurrence of tipping points from time series data remains an outstanding challenge and a major interest in a broad range of research fields. In particular, the widely used methods based on bifurcation theory are neither reliable in prediction accuracy nor applicable to irregularly-sampled time series, which are commonly observed in real-world systems. Here we address this challenge by developing a deep learning algorithm for predicting the occurrence of tipping points in untrained systems, by exploiting information about normal forms. Our algorithm not only outperforms traditional methods for regularly-sampled model time series but also achieves accurate predictions for irregularly-sampled model time series and empirical time series. Our ability to predict tipping points for complex systems paves the way for mitigating risks, preventing catastrophic failures, and restoring degraded systems, with broad applications in social science, engineering, and biology.
A scalable end-to-end Gaussian process adapter for irregularly sampled time series classification
We present a general framework for classification of sparse and irregularly-sampled time series. The properties of such time series can result in substantial uncertainty about the values of the underlying temporal processes, while making the data difficult to deal with using standard classification methods that assume fixed-dimensional feature spaces. To address these challenges, we propose an uncertainty-aware classification framework based on a special computational layer we refer to as the Gaussian process adapter that can connect irregularly sampled time series data to any black-box classifier learnable using gradient descent. We show how to scale up the required computations based on combining the structured kernel interpolation framework and the Lanczos approximation method, and how to discriminatively train the Gaussian process adapter in combination with a number of classifiers end-to-end using backpropagation.
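The adapter idea — condition a GP on the irregular observations, then read out the posterior on a fixed reference grid so any standard classifier gets a fixed-dimensional input — can be shown at small scale with exact RBF-kernel inference. The paper's scaling machinery (structured kernel interpolation and the Lanczos approximation) is omitted here, and the kernel hyperparameters are arbitrary:

```python
import numpy as np

def gp_adapter_features(t_obs, y_obs, t_ref, lengthscale=0.5, noise=0.1):
    """Small-scale sketch of the Gaussian process adapter: exact GP
    regression from irregular observations to a fixed reference grid,
    returning posterior mean and variance as fixed-size features."""
    def k(a, b):
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / lengthscale ** 2)

    K = k(t_obs, t_obs) + noise ** 2 * np.eye(len(t_obs))  # noisy kernel matrix
    Ks = k(t_ref, t_obs)                                   # cross-covariances
    mean = Ks @ np.linalg.solve(K, y_obs)                  # posterior mean on grid
    # Posterior variance: prior variance (1.0 for RBF) minus explained part.
    var = 1.0 - np.einsum('ij,ij->i', Ks, np.linalg.solve(K, Ks.T).T)
    return mean, var

t_obs = np.array([0.1, 0.4, 1.3, 2.0])       # irregular observation times
y_obs = np.array([1.0, 0.8, -0.2, 0.1])
t_ref = np.linspace(0.0, 2.0, 8)             # fixed grid -> fixed feature size
mean, var = gp_adapter_features(t_obs, y_obs, t_ref)
print(mean.shape, var.shape)  # (8,) (8,)
```

Feeding both the mean and the variance downstream is what makes the framework uncertainty-aware: the classifier sees not just interpolated values but how confident the GP is at each grid point.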
Learning from Irregularly-Sampled Time Series: A Missing Data Perspective
Li, Steven Cheng-Xian, Marlin, Benjamin M.
Irregularly-sampled time series occur in many domains including healthcare. They can be challenging to model because they do not naturally yield a fixed-dimensional representation as required by many standard machine learning models. In this paper, we consider irregular sampling from the perspective of missing data. We model observed irregularly-sampled time series data as a sequence of index-value pairs sampled from a continuous but unobserved function. We introduce an encoder-decoder framework for learning from such generic indexed sequences. We propose learning methods for this framework based on variational autoencoders and generative adversarial networks. For continuous irregularly-sampled time series, we introduce continuous convolutional layers that can efficiently interface with existing neural network architectures. Experiments show that our models are able to achieve competitive or better classification results on irregularly-sampled multivariate time series compared to recent RNN models while offering significantly faster training times.
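The continuous convolutional layer mentioned above replaces a discrete filter on a grid with a kernel defined as a continuous function of the time offset, so it can be evaluated at whatever irregular offsets occur. A minimal sketch, using a fixed Gaussian bump where the paper learns the kernel:

```python
import numpy as np

def continuous_conv(times, values, query_times, width=0.5):
    """Sketch of a continuous convolution over index-value pairs: the
    kernel is a continuous function of the time offset (a fixed Gaussian
    here; learned in practice), so irregular sampling needs no imputation."""
    out = np.zeros(len(query_times))
    for i, tq in enumerate(query_times):
        w = np.exp(-0.5 * ((times - tq) / width) ** 2)    # kernel at irregular offsets
        out[i] = np.sum(w * values) / (np.sum(w) + 1e-8)  # normalized response
    return out

ts = np.array([0.0, 0.2, 1.1, 1.15, 2.0])   # irregular observation times
vs = np.array([1.0, 0.9, -0.5, -0.4, 0.3])
grid = np.linspace(0.0, 2.0, 5)             # regular outputs for downstream layers
print(continuous_conv(ts, vs, grid).shape)  # (5,)
```

Querying the layer on a regular grid, as above, is one way such a layer can interface with existing (grid-based) neural network architectures.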
Latent Ordinary Differential Equations for Irregularly-Sampled Time Series
Rubanova, Yulia, Chen, Tian Qi, Duvenaud, David K.
Set Functions for Time Series
Horn, Max, Moor, Michael, Bock, Christian, Rieck, Bastian, Borgwardt, Karsten
Nevertheless, in many application domains, in particular healthcare (Yadav et al., 2018), measurements might not necessarily be observed at a regular rate or could be misaligned. Moreover, the presence or absence of a measurement and its observation frequency may carry information of its own (Little & Rubin, 2014), such that imputing the missing values is not always desired. While some algorithms can be readily applied to datasets with varying length, these methods usually assume regular sampling of the data and/or require the measurements across modalities to be aligned/synchronized, preventing their application to the aforementioned settings. Existing approaches for unaligned measurements, by contrast, typically rely on imputation to obtain a regularly-sampled version of a data set for classification. Learning a suitable imputation scheme, however, requires understanding the underlying dynamics of a system; this task is significantly more complicated and not necessarily required when classification is the main goal. Furthermore, even though a decoupled imputation scheme followed by classification is generally more scalable, it may lose information that is relevant for prediction tasks. Approaches that jointly optimize both tasks add a large computational overhead, thus suffering from poor scalability or high memory requirements. Our method is motivated by the understanding that, while RNNs and similar architectures are well suited for capturing and modelling the dynamics of a time series and thus excel at tasks such as forecasting, retaining the order of an input sequence can even be a disadvantage in classification scenarios. (arXiv:1909.12064v1)